Do you have feedback or questions? If so, please send us email at feedback@agilelogic.com.
Agile Logic's Weblog

Welcome to Agile Logic's weblog! Here you'll find writings from Agile Logic's team of experts on topics ranging from agile processes, principles and patterns; to object-oriented design and design patterns; to enterprise Java and .NET technologies; to our experiences on our clients' mission-critical projects.

You can subscribe to our weblog via RSS at http://www.agilelogic.com/weblog/?flav=rss
Our feed is also available via FeedBurner at http://feeds.feedburner.com/AgileLogic

Mar 19, 2005
Can We Be Sarbanes-Oxley Compliant with Scrum?

I recently did a little investigating (brainstorming, really) for a client on Sarbanes-Oxley (SOX) issues with Scrum. I didn't get to see it through to an audit, and I'm hardly an expert on SOX compliance. But in case it helps, here are some results of that thinking...

At its core, SOX requires that an organization maintain adequate controls over financial data and access to it across the organization. The infamous Section 404 requires that CEOs and CFOs sign off on those controls, with severe penalties if they are wrong. There's nothing like a scared CEO to make the development organization scramble in their wake.

There are a lot of aspects of SOX compliance that I don't think affect the Scrum development team directly, like making sure we have backups of data, security controls, etc. -- more IT operations kinds of things. If these concerns spawn off requirements for the systems we build, then of course the team has to build to those requirements (see below).

One area of SOX compliance is making sure the financial information a company uses is consistent and correct. For software, this is more an issue of what systems are in place, and how those systems store and access financial data. These types of requirements would of course feed items into Scrum backlogs, perhaps affecting the Product Owner's work, but they have less to do with the development process itself.

A second area is making sure the systems, once we have the proper requirements figured out, actually function correctly when working with the financial data. Scrum does not specify testing practices, but the spirit of Scrum asks us to provide a measure of completeness for backlog items, which of course implies testing. If we use agile acceptance testing practices to build a solid, automated testing safety net around our backlog items, proving the correctness of financial systems becomes a whole lot easier, and auditing for correctness becomes pretty straightforward. But IMHO we have to raise our acceptance testing to a pretty high level (as we would need to under any process).
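As an illustration of what one strand of such an automated safety net might look like, here is a minimal sketch of an executable acceptance test. The `InterestCalculator` class and its rounding rule are hypothetical, invented here purely for illustration:

```python
import unittest
from decimal import Decimal, ROUND_HALF_UP


class InterestCalculator:
    """Hypothetical financial component under test."""

    def monthly_interest(self, balance: Decimal, annual_rate: Decimal) -> Decimal:
        # Round to cents with one fixed, auditable rounding rule.
        return (balance * annual_rate / 12).quantize(
            Decimal("0.01"), rounding=ROUND_HALF_UP
        )


class MonthlyInterestAcceptanceTest(unittest.TestCase):
    """The backlog item's acceptance criteria, expressed as executable checks."""

    def test_interest_is_rounded_to_cents(self):
        calc = InterestCalculator()
        self.assertEqual(
            calc.monthly_interest(Decimal("1000.00"), Decimal("0.06")),
            Decimal("5.00"),
        )


if __name__ == "__main__":
    # exit=False runs the suite without terminating the interpreter.
    unittest.main(exit=False)
```

Because a suite like this runs the same way every time, its results double as audit evidence that the financial behavior of the system hasn't silently changed.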

A third area is controlling changes to the financial software systems, so that we can prove we're not altering their functionality or correctness without some level of control over those changes. Fortunately, Scrum (like nearly all agile processes) already provides a pretty good level of scope control via the product backlog and sprint backlog. I think we'd have to add some more ceremony around backlog changes, like sign-offs, to provide the auditor with evidence that changes are not being made without controls. It would put some extra hoops in place for developers when they want to refactor code -- I would guess we'd need someone on the technical team to sign off on refactorings. Maybe pairing with such an authorized person would help?

I think these general approaches also help in thinking about how to comply with other regulated environments, like FDA or HIPAA. As with those environments, many people interpret the SOX regulations as requiring a particular type of development process, but from talking with a couple of knowledgeable folks, it seems they do not -- they only ask for certain levels of controls and auditing. The problem is that a scared CEO or CFO may go overboard to protect their interests, and may end up buying into one of the expensive, heavyweight compliance solutions that dictate process elements they don't need.

As I mentioned, I didn't get to see whether our investigations resulted in a SOX-compliant Scrum-like process or not. But the talks with some more knowledgeable SOX folks were promising, so I think we were headed in the right direction, and that it would be very possible, and not too painful, to pass a SOX audit with a process based on Scrum plus some added formality and ceremony. Maybe we'd be a bit less agile, but it shouldn't hurt the core agile values and strategies.

/Paul Hodgetts

Dec 31, 2004
The MSF Agile Process - Is It "Agile?"

Microsoft will soon (what "soon" means is anyone's guess ;-) be releasing their Microsoft Solutions Framework Version 4.0. Part of the MSF is a process framework that presumably the tools will help facilitate and/or enforce. They are calling this process "MSF Agile." Based on some recent discussions on the XPSoCal and Agile Project Management mailing lists, following are some thoughts on whether MSF Agile is really an "agile" process.

The MSF Version 4.0 Beta site and a link to download the MSF Agile process definition can be found at http://workspaces.gotdotnet.com/msfv4 (which is pretty sparse, so perhaps I'm missing some key details).

What concerns me the most is that there is little that emphasizes core agile strategies. Similar to RUP, it sure looks possible to be agile with the MSF Agile process. But unlike processes such as XP, Scrum and Crystal, there is little that drives me toward being agile. My concern is that many will implement a non-agile version of MSF Agile, think that what they have is agile, call it that, and then blame agile when it exhibits the same old problems.

For example, the core principles of Agile boil down to getting a cross-functional team collaborating together, driven by a shared vision of their goals and outcomes, to progressively ship their product in (small) steps, gathering feedback from each step and adapting their approach along the way as needed.

MSF Agile defines five key roles in the process, mapped to the work streams they are to perform. I find no mention of these roles collaborating on tasks at all; the process seems to suggest that scenarios are passed off from one role to another. The Architect seems to be the one who produces the architectural definition and subsystem division, and who even breaks the work down into tasks, while the Developer seems only to crank out code and fix bugs. Nothing stops a team from assigning every developer a combined Architect/Developer role, just like XP, but I feel the process definition leads us in another direction.

A shared vision is supported with a specific vision statement, and a list of scenarios is maintained, so there is some notion of artifacts that support a shared common vision. But there is nothing in the planning, analysis or development activities that suggests how the team rallies around that vision, a la the Daily Scrum in Scrum or the Planning Game in XP.

MSF Agile does seem to be iterative. It's not clear to me if the process encourages each iteration to produce a "potentially shippable increment of product." Certainly the team could act as if the "Release a Product" work stream needs to happen each iteration, but as written the key activities needed to finalize a release are only done in that work stream, and that work stream is only shown in the last iteration.

Also, as shown, each iteration has planning activities that occur in the prior iteration as well as testing activities that occur in the next iteration. As described, this looks more like a mini-waterfall that stretches analysis/planning, design/programming and testing across three consecutive iterations. A pure agile approach would limit the leakage out of an iteration, although in most projects in which I've participated some does occur (this is called a "Type B Scrum" in the Scrum world). But an entire class of activities shouldn't always be assigned to the prior or next iteration like this.

The planning activities are scoped to occur each iteration, which seems incremental, and there is a small mention of selecting scenarios based on priority. Agile processes emphasize prioritizing primarily on business value, but I'm not clear what MSF Agile recommends as the priority criteria. Although nothing states that the planning should adapt to the results of the previous iteration, the fact that planning recurs each iteration seems to suggest that.

There is nothing I can find about how the team locally adapts the process -- for example, a retrospective step a la Scrum. Given that the process is fairly defined and prescriptive, down to the order of activities, the required activities and how to perform them (e.g., see any of the work stream details), I would be concerned that it would be interpreted as discouraging local adaptation of the work streams. There seem to be no meta work streams that focus on the team or the process.

OK, that's enough. My point is that I can see how I could coach a team to be agile and probably still work inside the MSF Agile process definition, the same way I could coach a team to be agile within RUP. But I doubt a non-agile team could take the MSF Agile process definition and become agile using it. I'm not as concerned with the lack of specific practices and tactics like refactoring or unit testing as I am with the apparent lack of emphasis on Agile principles and strategies.

/Paul Hodgetts

Dec 03, 2004
Does XP Give a Low Priority to Architecture?

On the DSDM mailing list, one of the participants wrote:

> My chief hesitation around XP is that it seems to give a low
> priority to architecture, so that you may end up with a
> zero-bug product quickly, but the product may not meet medium-
> to long-term requirements for performance, scalability,
> reliability, maintainability, etc. because it focusses on
> meeting short-term functionality.

After having practiced XP for nearly five years now, I would say that, unfortunately, this is a common misconception -- XP does not "give a low priority to architecture."

XP has two planning horizons -- the shorter-term iteration horizon, and the longer-term release horizon. The iteration horizon is typically one or two weeks. The release horizon is business context dependent, but is typically one to three months. It's not uncommon for a team to look even further out than a release horizon to understand longer-term product goals, even if they constrain their detailed planning activities to a smaller horizon. XP does not mandate the elimination of common sense.

Within each horizon, the team plans for a number of things, but not just task planning -- also architectural planning. It would not be possible for a team to create a good release plan unless they had developed a strategy for how they were building the system, which includes architecture. This strategy is at a level that the team feels is necessary to create their plan, which is often at a lower level of refinement than most architects are accustomed to from processes that practice more up-front architecture. I don't see this as incompatible with DSDM's iteration approach, although XP typically wouldn't have a specific Functional Model Iteration.

As more and more features are implemented, the architecture evolves to support the new feature set. XP prefers the architecture and design to be the minimal, but sufficient one to support the current feature set, but not more. This can be uncomfortable to architects accustomed to larger up-front architectural design. But the intent is to minimize the investment required to produce a deliverable feature set (i.e. maximize ROI and minimize time-to-ROI).

What balances the lesser level of up-front refinement and the effects of evolving the architecture is continuous retrospection on the state of the architecture, and a disciplined approach to refactoring the system to continuously move the architecture toward a state of "goodness." That goodness includes all the necessary "-ilities" mentioned above. XP doesn't let even a partial system perform poorly, or scale poorly, or lack needed maintenance documentation, etc.

This is not an easy thing to do, requiring the developers to be able to frequently take a step back from the code they are writing in support of the short-term feature implementation to see the bigger architectural picture. It's sometimes easy to get stuck in a local maximum of design and not see a larger improvement that could be made. IMHO, not all developers have nurtured this skill set. But it's similar with up-front architectural activities -- they are not easy either, and require a skill set that not all developers have nurtured.

So I would argue that XP, in fact, places a *continuous* emphasis and priority on architecture. XP places a very high value on the quality of the architecture, design and code, and mandates the refactoring practice to ensure the team always retrospects on that quality and moves the system back to an acceptable level of goodness when it strays. Normally this is done in a series of small refactoring steps, but it is sometimes necessary to take a larger refactoring step to break out of a local design maximum.
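To make "small, behavior-preserving steps" concrete, here is a minimal sketch of one such step. The duplicated-total functions are invented for illustration, not taken from any real project; the point is that the step is guarded by checks that pass both before and after it:

```python
# Before: the total calculation is duplicated inline in two reporting functions.
def order_summary(prices, tax_rate):
    return {"total": sum(prices) * (1 + tax_rate)}

def invoice_line(prices, tax_rate):
    return f"Total due: {sum(prices) * (1 + tax_rate):.2f}"

# One small refactoring step: extract the shared calculation into one place,
# then point both callers at it. Behavior is unchanged.
def total_with_tax(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

def order_summary_refactored(prices, tax_rate):
    return {"total": total_with_tax(prices, tax_rate)}

def invoice_line_refactored(prices, tax_rate):
    return f"Total due: {total_with_tax(prices, tax_rate):.2f}"

# The safety net: the same checks hold before and after the step.
assert order_summary([10.0, 20.0], 0.1) == order_summary_refactored([10.0, 20.0], 0.1)
assert invoice_line([10.0, 20.0], 0.1) == invoice_line_refactored([10.0, 20.0], 0.1)
```

A larger refactoring works the same way, just as a longer sequence of such guarded steps.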

It's unfortunate that this misconception is prevalent, but perhaps not surprising. Most of the refactoring writings focus on smaller-scale refactorings that are intended to improve code quality or local design quality. It's only recently, through books like Joshua Kerievsky's "Refactoring to Patterns," that larger-scale refactoring is being publicized. But XP gurus like Ron Jeffries have been discussing this for years on the XP mailing list, so it's not a new concept and has been part of mainstream XP practice for some time. I've also seen teams try XP without proper refactoring, essentially using XP as a project management wrapper for a "code-like-hell" development process, but that's not what XP is about.

/Paul Hodgetts

Sep 21, 2004
Benefits of Pair Programming

I've worked with a number of teams that have a heavy resistance to pair programming. The resistance is typically justified with the argument that it seems like a waste of resources to have two people work on one task. I'm not one to dictate what practices a team should adopt (although I'm not afraid to share my somewhat strong opinions ;-), but I suggest considering these potential benefits of pair programming before deciding not to adopt it:

  • Multiple sources of problem solving input avoids most instances of getting stuck.
  • Multiple sources of problem solving input provides more solutions from which to choose.
  • Partner can add element of parallelism (e.g., look up API method while pair compiles code).
  • Partner continuously reviews work products for correctness, conformance to standards, etc.
  • Partner continuously reviews process for conformance.
  • Continuous review increases odds of catching mistakes, misunderstandings early.
  • Continuous review reduces odds of lengthy pursuits of incorrect or inappropriate solutions.
  • Allows sources of specialty knowledge to rotate and be available in more situations.
  • Cross-training occurs at an accelerated rate (proportional to frequency of pair rotation).
  • Mentoring occurs at an accelerated rate (proportional to distribution of pair partners).
  • Decreases risk that individual developer mistakes will damage significant parts of the system.
  • Reduces dependencies on specific individuals for parts of the system or technologies.
  • Increased overall involvement enables higher degree of team self-management.
  • Increases social interaction, camaraderie among team members.
  • Common goals of pairs facilitate team building.
  • Common successes shared by pairs increase the sense of team accomplishment.
  • Increased sharing of responsibilities facilitates team culture evolution.
  • Partner provides motivation to stay continuously engaged.
  • Natural "competition" encourages higher performance levels.
  • Partner provides sanity check for knowing when to take breaks or stop for the day.
  • Reduces the need for expensive development workstations by half.

/Paul Hodgetts

Hill Climbing and Hill Searching

Micro-focused refactoring is a strategy like hill climbing. When done well, it leads to the best (or a sufficient) local design (the peak of the hill).

The localized hill climbing strategy doesn't, however, ensure that you're climbing the highest hill. We may arrive at the best design for the given context (the peak of the hill), but not at the optimal, or at least an overall acceptable, design (highest hill). (This is referred to as a "local maximum.")
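The analogy maps directly onto the classic hill-climbing algorithm. A minimal sketch (the two-hill landscape function is invented for illustration) shows how greedy local steps stop at whatever peak is nearest, which may not be the highest one:

```python
def hill_climb(f, x, step=1):
    """Greedy hill climbing: move to a better neighbor until none exists."""
    while True:
        neighbors = [x - step, x + step]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x  # no neighbor is higher: a (possibly only local) peak
        x = best

# A landscape with two hills: a local peak at x=2 (height 4)
# and the global peak at x=8 (height 9).
def landscape(x):
    return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)

# Starting on the lower hill, the climber stops at its peak...
print(hill_climb(landscape, 0))   # stops at the local maximum, x = 2
# ...while starting nearer the higher hill reaches the global one.
print(hill_climb(landscape, 6))   # reaches the global maximum, x = 8
```

The starting point (the initial design context) determines which peak pure local search can reach, which is exactly the risk the next paragraphs describe.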

If the wrong hill is chosen, there is some cost in retracing our steps back down some or all of the hill (rework). The higher the hill and the more we climb before noticing there are higher hills, the greater the added journey to get to the better hill (the more complex the problem and the deeper into the implementation we get, the more the rework costs). We can only hope we don't run out of daylight before we can climb back down and ascend the better hill (analogy left to the reader ;-).

Macro-focused design is a strategy like hill searching. It provides a better vantage point to look for the highest hill (optimal, or at least better, design).

However, we need to have a good view of the mountain range (context) in order to search for the highest hill. If we have a clearly known context, with a known set of hills to search (known set of design solutions, a.k.a. patterns), we can find and choose the optimal design solution/pattern (hill to climb).

Because of the cost of retracing our steps (rework), and the amount of daylight left (resources), it's a good idea if we can find a vantage point and scan for the higher hills before we start climbing. We could save a lot of climbing and retracing this way. By recognizing the obvious contexts of our problem, and by researching the canonical solutions (patterns), we can significantly reduce the risk of rework.

If we don't have such a well-defined context, with canonical solutions, then searching for the highest hill can be speculative and a waste of time. In these cases, it's probably better to just pick the highest hill we can see and start climbing. Maybe on the way up we'll notice a higher hill (learn something new), or determine we're on a good hill after all (good enough). But there really aren't many hills that haven't been climbed these days.

Once chosen, the highest hill is still climbed a step at a time (with a micro-focused, test-first refactoring strategy). We take the most direct route, pausing for refreshment as needed, and stop when we reach the peak (sufficient solution). It's a good idea to stop occasionally and take a look around as well, just in case there's a higher hill that's appeared over the horizon. Sometimes we find that the hill has a little slope attached to another hill, and we end up on a different hill than we thought we would at first.

When hill searching, the ideal case is to choose the best hill (optimal design). But we also want to be conservative in choosing. There's no point in choosing a 12,000 ft. hill when we only need 9,000 ft. of altitude. The design choice should be the simplest, but sufficient, one that fits the known requirements.

Especially when there's uncertainty, either about the context or requirements, choosing smaller hills (simpler designs) reduces risk. If the design turns out inadequate, there's less to rework. If it ends up doing the job, it should be one of the simpler (and cheaper) solutions, and we win for doing less climbing (work).

When there's great uncertainty, we can just start climbing any convenient hill, one small step at a time. As we climb, it would be prudent to keep scouting just how high the hill we're climbing really is, and if there are any promising adjacent hills, before we get too far up. If we need to, we can always send up a lone climber to check out the route before we send the whole expedition up there (design spike). Being adaptive means being able and willing to change course, even double back, at any time (that courage thing).

Climbing in pairs can reduce risk. While one climber is focusing on the hand holds and making sure the ropes are secured, the other can be scouting the route and scanning the horizon for better hills. When things get dicey, they can consult. If one climber slips, the other can grab the rope and pull them up.

/Paul Hodgetts